12 research outputs found
Machine Learning for Human Activity Detection in Smart Homes
Recognizing human activities in domestic environments from audio and active power consumption sensors is a challenging task: on the one hand, environmental sound signals are multi-source, heterogeneous, and time-varying; on the other hand, active power consumption varies significantly even among electrical appliances of the same type.
Many systems have been proposed to process environmental sound signals for event detection in ambient assisted living applications. Typically, these systems use feature extraction, selection, and classification. However, despite major advances, several important questions remain unanswered, especially in real-world settings. A part of this thesis contributes to the body of knowledge in the field by addressing the following problems for ambient sounds recorded in various real-world kitchen environments: 1) which features and which classifiers are most suitable in the presence of background noise? 2) what is the effect of signal duration on recognition accuracy? 3) how do the signal-to-noise ratio (SNR) and the distance between the microphone and the audio source affect recognition accuracy in an environment in which the system was not trained? We show that for systems that use traditional classifiers, it is beneficial to combine gammatone frequency cepstral coefficients and discrete wavelet transform coefficients and to use a gradient boosting classifier. For systems based on deep learning, we consider 1D and 2D CNNs using mel-spectrogram energies and mel-spectrogram images as inputs, respectively, and show that the 2D CNN outperforms the 1D CNN. We obtained competitive classification results for both systems and validated the performance of our algorithms on public datasets (the Google Brain/TensorFlow Speech Recognition Challenge and the 2017 Detection and Classification of Acoustic Scenes and Events Challenge).
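As a rough illustration of the traditional pipeline favored above, the sketch below combines cepstral summary statistics with discrete wavelet transform energies and trains a gradient boosting classifier. This is not the thesis code: librosa ships no gammatone front end, so MFCCs stand in for the gammatone frequency cepstral coefficients, and the clips and labels are synthetic placeholders.

    import numpy as np
    import librosa
    import pywt
    from sklearn.ensemble import GradientBoostingClassifier

    def extract_features(y, sr=16000):
        # Cepstral summary statistics (MFCCs stand in for gammatone cepstra).
        cep = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        cep_stats = np.concatenate([cep.mean(axis=1), cep.std(axis=1)])
        # Log-energy of each subband of a 4-level discrete wavelet decomposition.
        dwt_stats = [np.log1p(np.sum(c ** 2)) for c in pywt.wavedec(y, "db4", level=4)]
        return np.concatenate([cep_stats, dwt_stats])

    rng = np.random.default_rng(0)
    clips = [rng.standard_normal(16000).astype("float32") for _ in range(20)]  # stand-in 1 s clips
    labels = rng.integers(0, 2, size=20)  # toy activity labels
    X = np.stack([extract_features(c) for c in clips])
    clf = GradientBoostingClassifier().fit(X, labels)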
Regarding the problem of energy-based human activity recognition in a household environment, we apply machine learning techniques to infer the state of household appliances from their energy consumption data and use rule-based scenarios that exploit these states to detect human activity. Since most activities within a house are related to the operation of an electrical appliance, this unimodal approach has the significant advantage of using inexpensive smart plugs and smart meters for each appliance. This part of the thesis proposes the use of unobtrusive and easy-to-install tools (smart plugs) for data collection and a decision engine that combines energy signal classification using dominant classifiers (compared in advance with grid search) and a probabilistic measure of appliance usage. The approach also helps preserve the privacy of the resident, since all activities are stored in a local database.
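The classifier comparison mentioned above can be sketched as follows: candidate models are tuned with grid search on windowed smart-plug power readings. The features, parameter grids, and synthetic data are illustrative assumptions, not the thesis configuration.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    def window_stats(power, win=30):
        # Mean/std/max summary per fixed-length window of an active-power trace.
        w = power[: len(power) // win * win].reshape(-1, win)
        return np.c_[w.mean(axis=1), w.std(axis=1), w.max(axis=1)]

    rng = np.random.default_rng(0)
    X = window_stats(rng.gamma(2.0, 40.0, size=3000))  # synthetic watt readings
    y = (X[:, 0] > 80).astype(int)                     # toy ON/OFF state labels

    candidates = [
        (SVC(), {"C": [1, 10], "gamma": ["scale", 0.1]}),
        (RandomForestClassifier(random_state=0), {"n_estimators": [100, 300]}),
    ]
    for model, grid in candidates:
        search = GridSearchCV(model, grid, cv=5).fit(X, y)
        print(type(model).__name__, search.best_score_, search.best_params_)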
Deep neural networks (DNNs) have received great research interest in the field of computer vision. In this thesis we adapt different architectures to the problem of human activity recognition. We analyze the quality of the extracted features, and more specifically how model architectures and parameters affect the ability of the features automatically extracted by DNNs to separate activity classes in the final feature space. Additionally, the architectures that we applied to our main problem were also applied to text classification, in which we consider the input text as an image and apply 2D CNNs to learn the local and global semantics of the sentences from the variations in the visual patterns of words. This work is a first step toward creating a dialogue agent that would not require any natural language preprocessing.
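One way to make the feature-space analysis concrete is to embed inputs with a network's penultimate layer and score how well activity classes separate there, for example with the silhouette coefficient; this metric choice, the toy network, and the random data below are illustrative assumptions, not objects from the thesis.

    import numpy as np
    import tensorflow as tf
    from sklearn.metrics import silhouette_score

    # Toy stand-in network; the thesis compares real DNN architectures.
    inp = tf.keras.Input(shape=(128,))
    h = tf.keras.layers.Dense(64, activation="relu")(inp)
    feats = tf.keras.layers.Dense(32, activation="relu", name="features")(h)
    out = tf.keras.layers.Dense(5, activation="softmax")(feats)
    model = tf.keras.Model(inp, out)

    # Embed inputs with the penultimate layer and score class separation;
    # after training, a higher silhouette suggests better-separated classes.
    embedder = tf.keras.Model(inp, feats)
    X = np.random.rand(200, 128).astype("float32")
    y = np.random.randint(0, 5, size=200)
    print("silhouette:", silhouette_score(embedder.predict(X, verbose=0), y))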
Finally, since in many domestic environments human speech coexists with other environmental sounds, we developed a Convolutional Recurrent Neural Network to separate the sound sources and applied novel post-processing filters in order to obtain an end-to-end noise-robust system. Our algorithm ranked first in the Apollo-11 Fearless Steps Challenge.
This work was supported by the Horizon 2020 research and innovation program under Marie Skłodowska-Curie grant agreement No. 676157, project ACROSSING.
Two-Dimensional Convolutional Recurrent Neural Networks for Speech Activity Detection
Speech Activity Detection (SAD) plays an important role in mobile communications and automatic speech recognition (ASR). Developing efficient SAD systems for real-world applications is a challenging task due to the presence of noise. We propose a new approach to SAD in which we treat it as a two-dimensional multilabel image classification problem. To classify the audio segments, we compute their Short-Time Fourier Transform spectrograms and classify them with a Convolutional Recurrent Neural Network (CRNN), traditionally used in image recognition. Our CRNN uses a sigmoid activation function, max-pooling in the frequency domain, and a convolutional operation as a moving average filter to remove misclassified spikes. On the development set of Task 1 of the 2019 Fearless Steps Challenge, our system achieved a decision cost function (DCF) of 2.89%, a 66.4% improvement over the baseline. Moreover, it achieved a DCF score of 3.318% on the evaluation dataset of the challenge, ranking first among all submissions.
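A minimal Keras sketch of the architecture this abstract outlines: 2D convolutions over an STFT magnitude spectrogram, max-pooling along the frequency axis only, a recurrent layer over time, a frame-wise sigmoid output, and a moving-average filter over the posteriors. The layer sizes and input shape are assumptions, not the submitted system.

    import numpy as np
    import tensorflow as tf

    T, F = 500, 257                          # frames x frequency bins (assumed)
    inp = tf.keras.Input(shape=(T, F, 1))
    x = tf.keras.layers.Conv2D(32, (3, 3), padding="same", activation="relu")(inp)
    x = tf.keras.layers.MaxPooling2D(pool_size=(1, 4))(x)  # pool frequency only
    x = tf.keras.layers.Conv2D(64, (3, 3), padding="same", activation="relu")(x)
    x = tf.keras.layers.MaxPooling2D(pool_size=(1, 4))(x)
    x = tf.keras.layers.Reshape((T, -1))(x)                # to (time, features)
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.GRU(64, return_sequences=True))(x)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # frame-wise speech prob.
    model = tf.keras.Model(inp, out)

    def smooth(frame_probs, k=11):
        # Moving-average filter over the frame posteriors to remove isolated
        # misclassified spikes, mirroring the paper's convolutional smoother.
        return np.convolve(frame_probs, np.ones(k) / k, mode="same")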
Energy-based decision engine for household human activity recognition
We propose a framework for energy-based human activity recognition in a household environment. We apply machine learning techniques to infer the state of household appliances from their energy consumption data and use rule-based scenarios that exploit these states to detect human activity. Our decision engine achieved a 99.1% accuracy for real-world data collected in the kitchens of two smart homes.
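To make the rule-based layer concrete, here is a hypothetical scenario rule over inferred appliance states; the appliances, states, and time window are invented for illustration and are not the rules used in the paper.

    from datetime import datetime, timedelta

    def detect_cooking(events, window=timedelta(minutes=30)):
        # Flag a "cooking" activity when the kettle and the oven are both
        # switched ON within a 30-minute window (hypothetical rule).
        on_times = {}
        for t, appliance, state in sorted(events):
            if state == "ON":
                on_times[appliance] = t
            if {"kettle", "oven"} <= on_times.keys():
                if abs(on_times["kettle"] - on_times["oven"]) <= window:
                    return True
        return False

    events = [
        (datetime(2019, 5, 1, 18, 2), "kettle", "ON"),
        (datetime(2019, 5, 1, 18, 20), "oven", "ON"),
    ]
    print(detect_cooking(events))  # True: both appliances active together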
Comparing CNN and Human Crafted Features for Human Activity Recognition
Deep learning techniques such as Convolutional Neural Networks (CNNs) have shown good results in activity recognition. One of the advantages of these methods is their ability to generate features automatically. This ability greatly simplifies the task of feature extraction, which usually requires domain-specific knowledge, especially when using big data, where data-driven approaches can lead to anti-patterns. Despite this advantage, very little work has been undertaken on analyzing the quality of the extracted features, and more specifically on how model architecture and parameters affect the ability of those features to separate activity classes in the final feature space. This work focuses on identifying the optimal parameters for the recognition of simple activities, applying this approach to signals from both inertial and audio sensors. The paper provides the following contributions: (i) a comparison of automatically extracted CNN features with gold-standard Human Crafted Features (HCF); (ii) a comprehensive analysis of how architecture and model parameters affect the separation of target classes in the feature space. Results are evaluated using publicly available datasets. In particular, we achieved a 93.38% F-score on the UCI-HAR dataset using 1D CNNs with 3 convolutional layers and a kernel size of 32, and a 90.5% F-score on the DCASE 2017 development dataset, simplified to three classes (indoor, outdoor, and vehicle), using 2D CNNs with 2 convolutional layers and a 2x2 kernel size.
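Read literally, the reported UCI-HAR configuration could look like the sketch below: three 1D convolutional layers with a kernel size of 32 over the 128-sample, 9-channel inertial windows. The filter counts, pooling, and classification head are assumptions, not the paper's exact network.

    import tensorflow as tf

    inp = tf.keras.Input(shape=(128, 9))   # UCI-HAR: 128-sample windows, 9 channels
    x = inp
    for filters in (32, 64, 64):           # three conv layers, kernel size 32
        x = tf.keras.layers.Conv1D(filters, 32, padding="same", activation="relu")(x)
        x = tf.keras.layers.MaxPooling1D(2)(x)
    x = tf.keras.layers.GlobalAveragePooling1D()(x)
    out = tf.keras.layers.Dense(6, activation="softmax")(x)  # six UCI-HAR activities
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])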
Audio Content Analysis for Unobtrusive Event Detection in Smart Homes
Environmental sound signals are multi-source, heterogeneous, and varying in time. Many systems have been proposed to process such signals for event detection in ambient assisted living applications. Typically, these systems use feature extraction, selection, and classification. However, despite major advances, several important questions remain unanswered, especially in real-world settings. This paper contributes to the body of knowledge in the field by addressing the following problems for ambient sounds recorded in various real-world kitchen environments: 1) which features and which classifiers are most suitable in the presence of background noise? 2) what is the effect of signal duration on recognition accuracy? 3) how do the signal-to-noise ratio and the distance between the microphone and the audio source affect recognition accuracy in an environment in which the system was not trained? We show that for systems that use traditional classifiers, it is beneficial to combine gammatone frequency cepstral coefficients and discrete wavelet transform coefficients and to use a gradient boosting classifier. For systems based on deep learning, we consider 1D and 2D Convolutional Neural Networks (CNNs) using mel-spectrogram energies and mel-spectrogram images as inputs, respectively, and show that the 2D CNN outperforms the 1D CNN. We obtained competitive classification results for two such systems. The first one, which uses a gradient boosting classifier, achieved an F1-score of 90.2% and a recognition accuracy of 91.7%. The second one, which uses a 2D CNN with mel-spectrogram images, achieved an F1-score of 92.7% and a recognition accuracy of 96%.
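For the second system, the input preparation might look like the following sketch, which converts a clip into a normalized mel-spectrogram image; the FFT, hop, and mel settings are assumptions rather than the paper's parameters, and the clip is a synthetic stand-in.

    import numpy as np
    import librosa

    def mel_image(y, sr=16000, n_mels=64):
        # Log-scaled mel spectrogram, normalized to [0, 1] so that it can be
        # treated as a single-channel image by a 2D CNN.
        S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                           hop_length=512, n_mels=n_mels)
        S_db = librosa.power_to_db(S, ref=np.max)
        return (S_db - S_db.min()) / (S_db.max() - S_db.min() + 1e-8)

    clip = np.random.default_rng(0).standard_normal(16000).astype("float32")
    img = mel_image(clip)                     # shape: (n_mels, frames)
    batch = img[np.newaxis, ..., np.newaxis]  # (1, n_mels, frames, 1) CNN input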
Image-based Text Classification using 2D Convolutional Neural Networks
We propose a new approach to text classification in which we consider the input text as an image and apply 2D Convolutional Neural Networks to learn the local and global semantics of the sentences from the variations in the visual patterns of words. Our approach demonstrates that it is possible to obtain semantically meaningful features from images of text without using optical character recognition or the sequential processing pipelines that traditional natural language processing algorithms require. To validate our approach, we present results for two applications: text classification and dialog modeling. Using a 2D Convolutional Neural Network, we were able to outperform the state-of-the-art accuracy results for a Chinese text classification task and achieved promising results for seven English text classification tasks. Furthermore, our approach outperformed memory networks without match types when using out-of-vocabulary entities from Task 4 of the bAbI dialog dataset.
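A hedged sketch of the idea: render a sentence into a grayscale bitmap with Pillow and feed it to an ordinary 2D CNN. The canvas size, default font, and toy two-class head are arbitrary choices for illustration, not the paper's setup.

    import numpy as np
    from PIL import Image, ImageDraw
    import tensorflow as tf

    def render(text, height=64, width=256):
        # Rasterize a sentence onto a white canvas with Pillow's default font.
        img = Image.new("L", (width, height), color=255)
        ImageDraw.Draw(img).text((2, 2), text, fill=0)
        return np.asarray(img, dtype="float32")[..., None] / 255.0

    inp = tf.keras.Input(shape=(64, 256, 1))
    x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inp)
    x = tf.keras.layers.MaxPooling2D(2)(x)
    x = tf.keras.layers.Conv2D(64, 3, activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    out = tf.keras.layers.Dense(2, activation="softmax")(x)  # toy 2-class head
    model = tf.keras.Model(inp, out)

    batch = np.stack([render("where is the nearest restaurant?")])
    print(model.predict(batch, verbose=0).shape)  # (1, 2)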
Audio-Based Event Detection at Different SNR Settings Using Two-Dimensional Spectrogram Magnitude Representations
Audio-based event detection poses a number of challenges that are not encountered in other fields, such as image detection. Challenges such as ambient noise, low Signal-to-Noise Ratio (SNR), and microphone distance are not yet fully understood. If multimodal approaches are to improve across a range of fields of interest, audio analysis will have to play an integral part. Event recognition in autonomous vehicles (AVs) is one such field at a nascent stage, one that can rely on audio alone or use it as part of a multimodal approach. In this manuscript, an extensive analysis focused on the comparison of different magnitude representations of the raw audio is presented. The data on which the analysis is carried out is part of the publicly available MIVIA Audio Events dataset. Single-channel Short-Time Fourier Transform (STFT), mel-scale, and Mel-Frequency Cepstral Coefficient (MFCC) spectrogram representations are used. Furthermore, aggregation methods for the aforementioned spectrogram representations are examined: feature concatenation is compared to the stacking of features as separate channels. The effect of the SNR on recognition accuracy and the generalization of the proposed methods to datasets both seen and not seen during training are studied and reported.
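The two aggregation strategies can be sketched as follows: concatenating the STFT, mel, and MFCC magnitude maps along the feature axis versus stacking them as separate channels of one image. All parameters and the synthetic stand-in clip are assumptions rather than the paper's settings.

    import numpy as np
    import librosa

    sr, n_fft, hop = 16000, 1024, 512
    y = np.random.default_rng(0).standard_normal(sr * 2).astype("float32")  # stand-in clip

    stft = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))          # (513, frames)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop, n_mels=64)      # (64, frames)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_fft=n_fft,
                                hop_length=hop, n_mfcc=20)               # (20, frames)

    # (a) Feature concatenation: join the maps along the feature axis.
    concat = np.concatenate([stft, mel, mfcc], axis=0)       # (597, frames)

    # (b) Channel stacking: bring each map to a common height (crude row
    # resampling here), then treat the three maps as image channels.
    def resize_rows(S, h=64):
        return S[np.linspace(0, S.shape[0] - 1, h).astype(int)]

    stacked = np.stack([resize_rows(m) for m in (stft, mel, mfcc)], axis=-1)
    print(concat.shape, stacked.shape)                       # (597, frames), (64, frames, 3)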
Audio-based Event Recognition System for Smart Homes
Building an acoustic-based event recognition system for smart homes is a challenging task due to the lack of high-level structures in environmental sounds. In particular, the selection of effective features is still an open problem. We make an important step toward this goal by showing that the combination of Mel-Frequency Cepstral Coefficients, Zero-Crossing Rate, and Discrete Wavelet Transform features can achieve an F1 score of 96.5% and a recognition accuracy of 97.8% with a gradient boosting classifier for ambient sounds recorded in a kitchen environment.
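A minimal sketch of that feature combination, assuming librosa and PyWavelets; the summary statistics and wavelet choice are illustrative, not the system's actual configuration.

    import numpy as np
    import librosa
    import pywt

    def kitchen_features(y, sr=16000):
        # MFCC means, mean zero-crossing rate, and log-energy of DWT subbands.
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
        zcr = librosa.feature.zero_crossing_rate(y).mean()
        dwt = [np.log1p(np.sum(c ** 2)) for c in pywt.wavedec(y, "db4", level=4)]
        return np.concatenate([mfcc, [zcr], dwt])

    # The resulting vectors would feed a gradient boosting classifier, as in
    # the earlier kitchen-sounds sketch.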
Aerial Networking: Creating a Resilient Wireless Network for Multiple Unmanned Aerial Vehicles
The goal of this report is to design the foundation of a wireless communications system between several Unmanned Aerial Vehicles (UAVs) to support Search and Rescue (SAR) missions. UAVs could help with these missions because they can provide aerial reconnaissance at low cost and risk. To maximize efficiency, the architecture of our ad hoc network includes several camera-equipped UAVs (drones) relaying their data through a central UAV called a "mothership." Our specific objectives, which we successfully met, were to demonstrate the feasibility of such a network in the laboratory and to lay the groundwork for the physical implementation of the system, including the assembly of a motherboard and Wi-Fi transmitters that will handle communication between the user and the UAVs.
Methodology for Assessing Impact of the Federation of Earth Science Information Partnerships
The Federation of Earth Science Information Partnerships (ESIP) is a consortium of partners that collect Earth data from satellites and sensors, but it does not have an effective way of obtaining performance indicators about its organization. We analyzed ESIP's website using software systems such as Google Analytics. The results of this IQP were used to help ESIP justify its importance to current funding sources, including NASA and NOAA. We recommended updating the website, using web analytics software, maintaining good relationships with partners, working with USGEO, and supplying a better monthly performance report.